Wind turbine wake modelling is crucial for accurate resource assessment, for layout optimisation, and for the operational control of wind farms. This work proposes a surrogate model for the representation of wind turbine wakes based on a state-of-the-art graph representation learning method termed a graph neural network. The proposed end-to-end deep learning model operates directly on unstructured meshes and has been validated against high-fidelity data, demonstrating its ability to rapidly make accurate 3D flow field predictions for various inlet conditions and turbine yaw angles. The specific graph neural network model employed here is shown to generalise well to unseen data and is less sensitive to over-smoothing than common graph neural networks. A case study based on a real-world wind farm further demonstrates the capability of the proposed approach to predict farm-scale power generation. Moreover, the proposed graph neural network framework is flexible and highly generic, and as formulated here can be applied to any steady-state computational fluid dynamics simulation on unstructured meshes.
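As a rough illustration of the kind of operation such a surrogate performs, the sketch below implements one generic message-passing step over a toy mesh graph in NumPy. The connectivity, feature sizes, and update rule are our own illustrative assumptions, not the architecture of the model described above.

```python
import numpy as np

def message_passing_step(node_feats, edges, w_self, w_nbr):
    """One generic message-passing layer: average neighbour features over
    mesh edges, then apply a linear update with a ReLU nonlinearity.
    node_feats has shape (n_nodes, d)."""
    n, _ = node_feats.shape
    agg = np.zeros_like(node_feats)
    deg = np.zeros(n)
    for i, j in edges:                    # undirected mesh connectivity
        agg[i] += node_feats[j]
        agg[j] += node_feats[i]
        deg[i] += 1
        deg[j] += 1
    agg /= np.maximum(deg, 1)[:, None]    # mean over neighbours
    return np.maximum(node_feats @ w_self + agg @ w_nbr, 0.0)

rng = np.random.default_rng(0)
# A toy 4-node "mesh": a square with one diagonal.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
x = rng.normal(size=(4, 8))               # e.g. coordinates + inlet-condition encoding
w1, w2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
h = message_passing_step(x, edges, w1, w2)
print(h.shape)  # (4, 8)
```

Because the layer is defined on arbitrary edge lists rather than regular grids, the same operation applies unchanged to any unstructured mesh, which is what makes graph-based surrogates a natural fit for CFD data.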
Given a partial differential equation (PDE), goal-oriented error estimation allows us to understand how errors in a diagnostic quantity of interest (QoI), or goal, arise and accumulate in a numerical approximation, for example one obtained using the finite element method. By decomposing the error estimate into contributions from individual elements, it is possible to formulate adaptation methods that modify the mesh with the objective of minimising the resulting QoI error. However, the standard error estimate formulation involves the true adjoint solution, which is unknown in practice. It is therefore common practice to approximate it with an 'enriched' approximation (e.g. in a higher-order space or on a refined mesh). Doing so typically incurs a significant increase in computational cost, which can be a bottleneck compromising the competitiveness of (goal-oriented) adaptive simulations. The central idea of this paper is to develop a 'data-driven' goal-oriented mesh adaptation approach by selectively replacing the expensive error estimation step with an appropriately configured and trained neural network. In this way, the error estimator can be obtained without even constructing the enriched spaces. An element-by-element construction is employed here, whereby local values of various parameters related to the mesh geometry and the underlying problem physics are taken as inputs, and the corresponding contribution to the error estimator is taken as output. We demonstrate that this approach is able to obtain the same accuracy at a reduced computational cost, for adaptive mesh test cases related to flow around tidal turbines, which interact via their downstream wakes, with the overall power output of the farm taken as the QoI. Moreover, we demonstrate that the element-by-element approach implies reasonably low training costs.
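The element-by-element construction can be sketched as follows: a small network maps local per-element features to that element's contribution to the error estimator, and summing over elements yields the global estimate. Everything below (feature choice, network size, random weights) is an illustrative assumption, not the paper's trained network.

```python
import numpy as np

def mlp(x, w1, b1, w2, b2):
    """A tiny feed-forward network applied independently to each element."""
    return np.tanh(x @ w1 + b1) @ w2 + b2   # shape (n_elements, 1)

rng = np.random.default_rng(1)
n_elements, n_feats, hidden = 100, 6, 16
# Hypothetical per-element inputs, e.g. element size, aspect ratio,
# and local solution/adjoint gradient magnitudes.
feats = rng.normal(size=(n_elements, n_feats))
w1, b1 = rng.normal(size=(n_feats, hidden)), np.zeros(hidden)
w2, b2 = rng.normal(size=(hidden, 1)), np.zeros(1)

per_element = mlp(feats, w1, b1, w2, b2)[:, 0]  # local error indicators
qoi_error_estimate = per_element.sum()          # global QoI error estimate
print(per_element.shape)
```

The per-element indicators are exactly what a metric-based adaptation loop consumes, which is why replacing only this step leaves the rest of the adaptive pipeline unchanged.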
Extracting information about fluid motion directly from images is challenging. Fluid flow represents a complex dynamical system governed by the Navier-Stokes equations. General optical flow methods are typically designed for rigid body motion and therefore struggle if applied directly to fluid motion estimation. Moreover, optical flow methods focus only on two consecutive frames without exploiting historical temporal information, whereas fluid motion (the velocity field) can be regarded as a continuous trajectory constrained by time-dependent partial differential equations (PDEs). This discrepancy has the potential to induce physically inconsistent estimates. Here, we propose a learning-based prediction-correction scheme for fluid flow estimation. An estimate is first given by a PDE-constrained optical flow predictor and is then refined by a physics-based corrector. Compared to existing supervised learning-based methods, the proposed approach shows competitive results on a benchmark dataset. Furthermore, the proposed approach can generalize to complex real-world scenarios in which ground truth information is effectively unknowable. Finally, experiments demonstrate that the physics-based corrector can refine the flow estimate by mimicking the operator splitting method commonly used in fluid dynamics simulation.
Artificial Intelligence (AI) has become commonplace for solving routine everyday tasks. Because of the exponential growth in medical imaging data volume and complexity, the workload on radiologists is steadily increasing. We project that the gap between the number of imaging exams and the number of expert radiologist readers required to cover this increase will continue to widen, consequently creating demand for AI-based tools that improve the efficiency with which radiologists can comfortably interpret these exams. AI has been shown to improve efficiency in medical-image generation, processing, and interpretation, and a variety of such AI models have been developed across research labs worldwide. However, very few of these, if any, find their way into routine clinical use, a discrepancy that reflects the divide between AI research and successful AI translation. To address the barrier to clinical deployment, we have formed the MONAI Consortium, an open-source community which is building standards for AI deployment in healthcare institutions, and developing tools and infrastructure to facilitate their implementation. This report represents several years of weekly discussions and hands-on problem-solving experience by groups of industry experts and clinicians in the MONAI Consortium. We identify barriers between AI-model development in research labs and subsequent clinical deployment, and we propose solutions. Our report provides guidance on the processes which take an imaging AI model from development to clinical implementation in a healthcare institution. We discuss various AI integration points in a clinical radiology workflow. We also present a taxonomy of radiology AI use-cases. Through this report, we intend to educate stakeholders in healthcare and AI (AI researchers, radiologists, imaging informaticists, and regulators) about cross-disciplinary challenges and possible solutions.
Fake videos represent an important misinformation threat. While existing forensic networks have demonstrated strong performance on image forgeries, recent results reported on the Adobe VideoSham dataset show that these networks fail to identify fake content in videos. In this paper, we propose a new network that is able to detect and localize a wide variety of video forgeries and manipulations. To overcome the challenges that existing networks face when analyzing videos, our network utilizes forensic embeddings to capture traces left by manipulation, context embeddings to exploit the conditional dependence of forensic traces upon local scene content, and spatial attention provided by a deep, transformer-based attention mechanism. We create several new video forgery datasets and use these, along with publicly available data, to experimentally evaluate our network's performance. These results show that our proposed network is able to identify a diverse set of video forgeries, including those not encountered during training. Furthermore, our results reinforce recent findings that image forensic networks largely fail to identify fake content in videos.
No existing spherical convolutional neural network (CNN) framework is both computationally scalable and rotationally equivariant. Continuous approaches capture rotational equivariance but are often prohibitively computationally expensive. Discrete approaches offer more favourable computational performance, but at the cost of equivariance. We develop a hybrid discrete-continuous (DISCO) group convolution that is simultaneously equivariant and computationally scalable to high resolution. While our framework can be applied to any compact group, we focus on the sphere. Our DISCO spherical convolutions exhibit not only $\text{SO}(3)$ rotational equivariance but also a form of asymptotic $\text{SO}(3)/\text{SO}(2)$ rotational equivariance, which is more desirable for many applications (where $\text{SO}(n)$ is the special orthogonal group representing rotations in $n$ dimensions). Through a sparse tensor implementation, we achieve linear scaling in the number of pixels on the sphere for both computational cost and memory usage. For 4k spherical images, we realize a saving of $10^9$ in computational cost and $10^4$ in memory usage compared to the most efficient alternative equivariant spherical convolution. We apply the DISCO spherical CNN framework to a number of benchmark dense-prediction problems on the sphere, such as semantic segmentation and depth estimation, on all of which we achieve state-of-the-art performance.
Mechanical systems naturally evolve on principal bundles describing their inherent symmetries. The resulting factorization of the configuration manifold into a symmetry group and an internal shape space has provided deep insights into the locomotion of many robotic and biological systems. On the other hand, the property of differential flatness has enabled efficient, effective planning and control algorithms for various robotic systems. However, a practical means of finding a flat output for an arbitrary robotic system remains an open question. In this work, we demonstrate surprising new connections between these two domains, for the first time employing symmetry directly to construct a flat output. We provide sufficient conditions for the existence of a trivialization of the bundle in which the group variables themselves are a flat output. We call this a geometric flat output, since it is equivariant (i.e. symmetry-preserving) and is often global or almost-global, properties not typically enjoyed by other flat outputs. In such a trivialization, the motion planning problem is easily solved, since a given trajectory for the group variables fully determines the trajectory for the shape variables that exactly achieves this motion. We provide a partial catalog of robotic systems with geometric flat outputs and worked examples for the planar rocket, the planar aerial manipulator, and the quadrotor.
When inferring reward functions from human behavior (be it demonstrations, comparisons, physical corrections, or e-stops), it has proven useful to model the human as making noisy-rational choices, with a "rationality coefficient" capturing how much noise or entropy we expect to see in the human behavior. Many existing works choose to fix this coefficient regardless of the type or quality of the human feedback. However, giving a demonstration may be much harder than answering a comparison query, for instance. In that case, we should expect to see more noise or suboptimality in demonstrations than in comparisons, and should interpret the feedback accordingly. In this work, we advocate that grounding the rationality coefficient in real data for each feedback type, rather than assuming a default value, has a significant positive effect on reward learning. We test this in experiments with simulated feedback as well as in a user study. We find that overestimating human rationality can have dire consequences for reward accuracy and regret when learning from a single feedback type. Further, we find that the rationality level affects the informativeness of each feedback type: surprisingly, demonstrations are not always the most informative; when human behavior is very suboptimal, comparisons actually become more informative, even at the same rationality level. Moreover, when the robot gets to decide which feedback type to ask for, it can gain a significant advantage by accurately modeling the rationality level of each type. Ultimately, our results emphasize the importance of paying attention to the assumed rationality level, not only when learning from a single feedback type, but especially when agents learn from multiple feedback types.
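The noisy-rational choice model referred to above is typically the Boltzmann model: the probability of each option scales as exp(beta * reward), with beta the rationality coefficient. A minimal sketch (rewards and beta values are illustrative, not from the paper):

```python
import numpy as np

def boltzmann_choice_probs(rewards, beta):
    """Probability of choosing each option under a noisy-rational human:
    P(a) proportional to exp(beta * R(a))."""
    z = beta * np.asarray(rewards, dtype=float)
    z -= z.max()                  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

rewards = [1.0, 0.0, -1.0]
p_uniform = boltzmann_choice_probs(rewards, beta=0.0)
p_sharp = boltzmann_choice_probs(rewards, beta=10.0)
print(p_uniform)  # beta -> 0: uniform, noise dominates
print(p_sharp)    # large beta: mass concentrates on the best option
```

Fixing beta means fixing how strongly an observed choice is taken as evidence about the reward, which is why calibrating it per feedback type changes what the learner infers.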
We present a theorem-proving approach to temporal logic constraints for training neural networks. We deeply embed linear temporal logic over finite traces (LTL$_f$), together with an associated evaluation function characterizing its semantics, in the higher-order logic of the Isabelle theorem prover. We then proceed to formalize a loss function $\mathcal{L}$, which we formally prove to be sound and differentiable with a function $d\mathcal{L}$. Subsequently, we use Isabelle's automatic code generation mechanism to produce OCaml versions of LTL$_f$, $\mathcal{L}$, and $d\mathcal{L}$, and integrate them with PyTorch via OCaml bindings for Python. We show that, when used for training in an existing deep learning framework for dynamic movement, our approach produces the expected results for common movement specification patterns such as obstacle avoidance and patrolling. The distinctive benefit of our approach is a fully rigorous method of training, eliminating many of the risks inherent in ad hoc implementations of the logical aspects directly in an "unsafe" programming language such as Python.
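To give a flavour of what a differentiable LTL$_f$-style loss computes, the toy below scores a finite trace of per-step predicate values with smooth min/max approximations. This is our own informal sketch using invented names; the paper's loss is the version formalised and proved sound in Isabelle, not this one.

```python
import numpy as np

def soft_min(x, k=10.0):
    """Smooth, differentiable approximation of min (log-sum-exp trick)."""
    return -np.log(np.exp(-k * np.asarray(x)).sum()) / k

def soft_max(x, k=10.0):
    """Smooth, differentiable approximation of max."""
    return np.log(np.exp(k * np.asarray(x)).sum()) / k

def loss_always(scores):
    """Penalise violations of 'always p' on a finite trace: p must score
    positively at every step, so the trace score is a soft minimum."""
    return max(0.0, -soft_min(scores))

def loss_eventually(scores):
    """Penalise violations of 'eventually p': a soft maximum over steps."""
    return max(0.0, -soft_max(scores))

safe_trace = [0.5, 0.8, 0.6]      # p holds (positive score) at every step
unsafe_trace = [0.5, -0.9, 0.6]   # p violated at step 1
print(loss_always(safe_trace))    # 0.0: specification satisfied
print(loss_always(unsafe_trace))  # positive: specification violated
```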
We describe a novel lossy compression approach called DiffC, which is based on unconditional diffusion generative models. Unlike modern compression schemes, which rely on transform coding and quantization to restrict the transmitted information, DiffC relies on the efficient communication of pixels corrupted by Gaussian noise. We implement a proof of concept and find that it works surprisingly well despite the lack of an encoder transform, outperforming the state-of-the-art generative compression method on ImageNet 64x64. DiffC uses only a single model to encode and denoise corrupted pixels at arbitrary bitrates. The approach further provides support for progressive coding, that is, decoding from partial bit streams. We perform a rate-distortion analysis to gain a deeper understanding of its performance, providing analytical results for multivariate Gaussian data as well as initial results for general distributions. Furthermore, we show that a flow-based reconstruction achieves a 3 dB gain over ancestral sampling at high bitrates.
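As a reference point for the Gaussian rate-distortion analysis mentioned above, the classic closed form for a scalar Gaussian source under squared error (a textbook result, not the paper's bound) is R(D) = max(0, 0.5 * log2(sigma^2 / D)) bits per sample:

```python
import math

def gaussian_rate(sigma2, d):
    """Rate-distortion function of a Gaussian source with variance sigma2
    under mean squared error distortion d, in bits per sample."""
    return max(0.0, 0.5 * math.log2(sigma2 / d))

print(gaussian_rate(1.0, 0.25))  # 1.0 bit: quartering the distortion costs 1 bit
print(gaussian_rate(1.0, 2.0))   # 0.0: distortion above the source variance is free
```

Each halving of the allowed distortion costs exactly half a bit, which is the kind of smooth rate-distortion trade-off a scheme based on progressively denoising Gaussian-corrupted pixels can traverse with a single model.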